Meeting

Social Media and Digital Diplomacy: A Conversation With Nick Clegg of Meta

Friday, October 21, 2022
Speaker

Nick Clegg
President, Global Affairs, Meta; Former Deputy Prime Minister, United Kingdom (2010–2015)

Presider

Bianna Golodryga
Anchor and Senior Global Affairs Analyst, CNN; CFR Member

Nick Clegg discusses the role of Meta and social media in global politics, including its use as a platform for political movements, efforts to combat the spread of misinformation, and measures in place to support international election security.

Transcript:

GOLODRYGA: Hi, everyone. Hope you’re having a great lunch. My name is Bianna Golodryga. I’m the senior global affairs analyst at CNN. We’re delighted to have this conversation with you today.

It’s so lovely to meet you, Sir Nick. Could I call you Sir Nick, Nick Clegg?

CLEGG: Yeah. Nick is fine. Nick’s fine.

GOLODRYGA: Nick? Nick? OK. (Laughter.) Us Americans aren’t used to these titles. Nick Clegg, welcome. Welcome to the Council, and thanks for having us today, Richard.

Nick Clegg is the former deputy prime minister of the United Kingdom, serving from 2010 to 2015. I think we’re going to start there on that note because there’s a lot of news going on right now in the U.K. But since then he’s been president of global affairs at Meta Platforms since 2022, and he was previously vice president of global affairs and communications at Facebook from 2018 to 2022. Welcome. Good to see you.

CLEGG: Good to be here.

GOLODRYGA: OK. What is going on in England and who is going to be the next prime minister? (Laughter.)

CLEGG: I have no idea. I can’t follow it. It seems to change by the hour.

I mean, sorry, I shouldn’t say this. I mean, my own views are my own views, not sort of Meta’s views. I can’t sort of, in a sense, abstract myself from my views, having been involved in British politics in one way or another for close to twenty years.

What can I say that hasn’t been said a thousand times—I mean, firstly, the first thing to remember is that recent events have been brewing for a long time. It wasn’t some sort of eccentric decision by Kwasi Kwarteng and Liz Truss.

This has really been gestating for six years, ever since the Conservative Party and their avid supporters in the commentariat, in the newspaper industry, decided to advocate a sort of flat-earth economics at the time of the Brexit referendum. They made a claim, which they still stick to, that you can pull out of the world’s largest borderless single market, that you can try to deny the geographical fact that the United Kingdom happens to be tectonically located in Europe, and that, magically, that’s going to make you richer or more prosperous. It just—it was never true then, and it’s become ever more untrue since.

And the problem—which often happens when you have these big political swings, which, certainly, in the British context were almost revolutionary in their impact in 2016—is that when you start elevating ideology over reason, and a sort of theological, flat-earth economics over evidence, you keep doubling down on it.

So they’ve been doubling down on it, doubling down on it, and doubling down on it for the last six years, and it then culminates, of course, in this extremely ill-advised pitch: that you can somehow generate growth in an economy that, unlike the United States, doesn’t have a global reserve currency—one much, much more susceptible to the swings and roundabouts of the international markets, with one of the largest current account deficits we’ve had in years—that you can somehow magically cut taxes without accounting for that at all, without making any savings elsewhere, and that that will magically, you know, produce growth.

It’s all of a piece with what you’ve seen develop over the last six years. And I think it’s very sad, because I do think the world, particularly now, given the huge uncertainties and perils that abound, which are, obviously, discussed with great expertise in buildings like this—you know, I don’t want to overdo it—Britain is not the country it once was—but it would help if Britain had a reliable, thoughtful, stable government.

I mean, I was reading last night that the Irish Foreign Minister Simon Coveney said that in the six years that he’s been foreign minister he’s dealt with six Northern Ireland secretaries of state, five foreign secretaries, and four U.K. prime ministers. You can’t interact with a country if it’s just constantly in this state of perpetual turmoil.

So I think, you know, obviously, we’ve now reached an inflection point. My own personal view, unsurprisingly, is that it’s a terrible error, I think, for the Conservative Party to continue to play pass the parcel with, you know, whoever’s prime minister. This is something which should be decided by the British people.

GOLODRYGA: And it looks like it could very well be Boris Johnson returning. I mean, the defense minister, who’s a very respected man there in this cabinet currently, said that’s his choice at this point. I mean—

CLEGG: I mean—yeah, but, of course, when all the rules of the game are sort of cast aside, anything can kind of happen. And, of course, Boris Johnson will have the support of—I mean, for those of you who don’t follow British politics too much—and I don’t recommend it—(laughter)—if you want to have a sane and sensible working day—there is this very symbiotic relationship between the extremely powerful conservative newspapers in the United Kingdom and the Conservative Party.

The U.K. and Australia are the two democracies I’ve come across which are still very, very newspaper-dominated, much more so than here or in continental Europe. And the interesting thing about the Daily Mail and the others and the Conservative Party is that they’re both locked in a sort of embrace to hold on to what they’ve got, which is the appeal to older voters—you know, older readers. They’re catastrophically losing support from younger generations.

So, of course, from their point of view, they have a mutual interest to keep all of this out of the hands of British voters and that’s what’s going on now. I mean, could you just imagine what it’d be like if this pantomime was played out and it was a Labour government or a Liberal Democrat government?

I mean, can you imagine the headlines on the front page of the Daily Mail demanding an election by next Thursday? I mean, it would be off the—you know, it’d be off the Richter scale. But they, clearly, have an incentive to keep this—sort of to hold power in the way that they are.

I don’t think that’s healthy. I don’t think it’s healthy when the country’s going through such turbulence. I just think we need a reset. It’s an inflection point. I think there should be an election.

GOLODRYGA: Well, we should have a new leader announced as soon as next week.

CLEGG: By lunchtime. Yeah. (Laughter.)

GOLODRYGA: And we shall see.

I also jumped the gun. So I just want everybody to know that we’re going to have about twenty-five minutes where we’re going to talk and then we’re going to open it up for questions both here and then many are joining us virtually as well.

So let’s talk Meta, shall we? It has been exactly a year since Mark Zuckerberg announced the company would change its name from Facebook to Meta and that it would become a metaverse company. You described it as a logical evolution and the next generation of the internet. But some would describe it as having had a rocky start thus far.

The flagship virtual reality game Horizon Worlds has been placed under a quality lockdown for the rest of the year while the company retools the app. There are reports of internal friction over strategy. One senior leader complained about the money the company has spent on unproven projects, saying it made him sick to his stomach.

So my question to you just one year in—I know it’s new—but is the metaverse vision in danger?

CLEGG: So I think the most candid answer I can give you is this: in as much as anyone can tell about the long-term trends of how human beings interact with each other and use technology, and given the vast investment that is going into what is, loosely, called metaverse technologies—not just by us, by Meta, but, you know, ByteDance is investing huge amounts and now selling, not in the U.S. but elsewhere in the world, a headset called, you know, Pico; Apple is rumored to be coming out with a new device next year; Microsoft is investing a huge amount in this—

I think you’d be very foolish to assume that with that kind of momentum, that level of investment across the industry, and the logic towards this new computing platform, which I’ll come to in a minute, I think you’d be pretty foolish to assume that this isn’t going to materialize in some shape or form in the coming years.

Will that trajectory be smooth? No. There will be ups and downs. There will be peaks and troughs. There will be some products that will work and some experiences that take off and others that flounder.

Is it easy to predict now when this will develop mass societal appeal? No. I think it’s going to be many, many years in the making. And is it easy to predict which of the players that are investing now will somehow come out on top? No, none of that is predictable.

But I think it is relatively safe to assume that—in the same way that we moved from, you know, desktops to laptops—I bet you every single person in this room now has a mobile phone. Either you’re gripping it in the palm of your hand or it’s on the table or it’s in your pocket.

We live on it all the time. There’s no law of nature that says we’re only ever going to be stuck with, you know, mobile phones in our hands forever. The assumption of people who are proselytizers for new AR/VR technologies is that we will eventually graduate to something that we perch on the bridge of our nose, which, through advances in optic technology and other forms of technology, will allow us to communicate using this rather than that—and that alongside that hardware shift there’s a discernible shift towards deploying technologies which provide people with an ever more immersive sense that they are really occupying the same virtual space even if they are physically apart.

And if you think about it, you know, we were originally glued to our keyboards with long texts. Then we shifted to shorter texts, and then we shifted to, you know, photos when that became possible, and now we’ve shifted—a big shift, which TikTok has really pioneered in the last year or two—to short-form video. Short-form video is now about 50 percent of content on Facebook and, I think, even more so on Instagram.

What do all of those shifts represent? They suggest that wherever human beings have the opportunity to share ever more lifelike experiences with each other they will do so. And that is the promise of this new computing platform—it’s very important to call it a new computing platform. It’s not a new piece of hardware. It’s not a new app. It’s not a new gizmo. It’s not a new game.

You’re almost rebuilding the internet all over again. New operating systems, new services provided by the big players, new apps on top of that. You’re, literally, rebuilding the whole stack anew. And the promise that it holds out is that in the future—and this is not some sort of science fiction prediction—all of us in this room will be able to put something on the bridge of our nose, sitting comfortably at home, and we would congregate in a virtual room, which we could make look almost identical to this. And you and I would feel, as we’re talking to each other, that we’re breathing the same air, even though we might be thousands of miles apart.

If that sounds fantastical, I mean, I, for the last year or so, have been holding my weekly meetings with my team in something called Horizon Workrooms, which is our sort of nascent product where people can work together in these virtual spaces, and partly because of the audio technology it is quite extraordinary.

Even though we are slightly cartoonish-looking avatars at, you know, the current maturity of the technology, you feel you really are sitting around the same table. And that’s the bet that not just Meta is making—and, by the way, I should stress this: the metaverse, in as much as it will come about, will come about regardless of whether Meta the company exists or not.

Meta could disappear from the surface of the planet tomorrow—in fact, there are some people who would rather like us to disappear from the surface of the planet, now that I kind of think about it—and the metaverse will still be built.

GOLODRYGA: But has Meta—

CLEGG: It’s not going to be built by or owned by any one company.

GOLODRYGA: But has Meta built and invested too much into this too soon?

CLEGG: Well, look, I just don’t think you get these transformations without people being prepared to take big risks and really big bets. And one of the reasons why I, as, obviously, a nonengineer—you know, I’m a refugee from British politics—(laughter)—one of the reasons I enjoy working with Mark Zuckerberg is he’s still prepared to make big bets, and they’re bets without a guarantee.

Of course, no bet has a guarantee. But if people like that don’t try, you don’t get the progress. Because, yes, you do require people to have a real strategic sense of vision, to believe that improvements on the current technology we have can be developed, to be able to visualize that, and then to be prepared to pour huge amounts of money and huge numbers of engineers into trying to build them. And I think that is totally right and normal.

You know, you get the normal snark—oh, it’s not happening overnight, as people want. There’s a VR entrepreneur whose name I’ve momentarily forgotten who had a lovely comment the other day, which I read in one of the papers, where he said: look, it’s like trying to land someone on the moon for the first time and people are complaining that the coffee machine doesn’t work.

Of course, it’s going to be bumpy. Of course, certain things are not going to evolve exactly as you’d want. But my own view is, and I can’t stress this enough, Meta has been very prominent because of the rebranding of the company and because of Mark Zuckerberg’s advocacy of it.

But we’re not alone. It can’t be built by Meta on its own. I’d say it shouldn’t be built by Meta on its own. It’s got to be—for it to be successful, actually, you will need a high degree of interoperability between the different companies.

You’ll need a high degree of, yes, in effect, collaboration, because the worst thing, by the way—and the final point I’d say on this—is if everybody builds their own metaverse, if I could put it like that, in their own silo. Because that means that all of you, when you use these technologies in the future, which I suspect you will at some point, it’ll be a great shame if you end up feeling you’re locked either in a Microsoft metaverse or an Alphabet metaverse or an Apple metaverse and you can’t cross from one to the other. It’d be a bit like, you know, us not being able to send an email from Gmail to, you know, a different email provider.

The internet has developed, sometimes almost by accident, sometimes rather organically through the private sector just getting their act together, sometimes by standard-setting bodies, allowing us to move as seamlessly as we can across the current internet, and you’ll need something similar in the metaverse and that will require a collaborative approach between the industrial players and regulators and thinkers and standard-setting bodies, which is only very nascent at the moment.

GOLODRYGA: So let’s talk about some of the work that you’ve done—and the improvements that you’ve promised to make—on some of your more popular platforms, and that’s Facebook and Instagram.

And it’s also been a year since Frances Haugen, a former Facebook product manager, released thousands of documents and testified before Congress, arguing that Facebook’s products, including Instagram, harmed children, stoked division, and weakened our democracy.

So, since then, you—the company—have said the following is what you would work on: that the work on Instagram Kids would be paused. Is it still paused?

CLEGG: It’s still paused. Yeah.

GOLODRYGA: The company would introduce new controls for parents of teens so they can supervise what their teens are doing online. Have you done—

CLEGG: That’s been introduced. Yeah.

GOLODRYGA: Introduce something so that when your system sees that a teenager is looking at the same content repeatedly—content that may not be conducive to their well-being—you will nudge them to look at other content.

CLEGG: Correct. Yeah, we’ve introduced that.

GOLODRYGA: And introduce a take-a-break measure for the platform overall.

CLEGG: We’ve introduced that as well. Yeah.

GOLODRYGA: So what has been the takeaway moment, the learning curve, that you have gone through over this past year from these revelations? Because we’ll get to January 6 and the—

CLEGG: Sure.

GOLODRYGA: —inciting violence and division and all of that. But just from the Instagram issues related to harming children. And I know it was one out of five. It’s not every child, and there’s a lot of benefit that people do get out of Instagram as well. But to—

CLEGG: Can I—sorry. This is going to sound unduly defensive.

Can I just dwell on that last one? Because it’s quite important in this. I would have thought everybody in this room—I, certainly, do—would want these big companies to do research to work out who is getting a good experience from using the technology and who is getting a bad experience, precisely so that new measures can be introduced, some of which you’ve mentioned.

By the way, that is a small part of a much longer list of measures we have introduced recently—greater parental controls, greater age verification, prompting young teenagers who are dwelling on the same kind of content to move elsewhere, defaulting them into private accounts, and so on and so forth.

And my worry about the whole way in which that was portrayed was, firstly, it took something which wasn’t really research in the academic sense—and just to dwell on this for a second—the main thing that was cited at the time in all of that furor was a focus group of forty teenagers who had been invited to join the focus group because they had already said themselves that they suffered from a range of issues which we’re very familiar with, for those who’ve got, you know, teenage kids: sleeplessness, anxiety, body image issues, issues of insecurity about social comparison, and so on.

And then what the folks running that focus group did is they asked: look, when you use Instagram, given that you are already enduring issues like lack of self-respect, sleeplessness, anxiety, and so on, does using Instagram make you feel better, the same, or worse? And a third said it made them feel worse, and two-thirds said it made them feel better or the same. Which is, in many ways, not wholly surprising, because we know from the world of fashion magazines that people who are worried about their own body image sometimes find that the experience of looking at, you know, ideal types in a fashion magazine makes them feel worse.

It wasn’t a wholly surprising observation. And for that to then have been portrayed—I really—I feel quite strongly about this—as us somehow secretly knowing that we were deliberately doing damage and not wanting to do anything about it is such a grotesque inversion of the truth.

It was a focus group to ask how people who already have those issues feel, precisely so that we can design products accordingly—not to solve all their problems. We’re not going to solve sleeplessness. We’re not going to solve the dilemmas that teenagers have had since the dawn of time.

But we can try and make sure that the tools are there so that the majority of them—the vast majority, who, thankfully, felt the same or better—can continue to feel that way in ever larger numbers.

And my worry is that because of the way it was, I think, distorted—as I said, portrayed as some sort of conspiracy, some secret thing that we were indifferent to—it actually discourages the industry from doing the very research that we want people to do in order to improve those products. So—

GOLODRYGA: I mean, I guess the question is why weren’t some of these measures introduced prior to these revelations.

CLEGG: Well, that, I think, is a perfectly fair argument. The truth is these companies and these technologies are still quite young. I know everyone thinks we’ve lived in the sort of Facebook and Instagram era forever. But, you know, Facebook—Meta—is, what, only just over a decade and a half old? As I used to say until recently—before he retired, because it was a very good line and I can’t use it anymore—Mark Zuckerberg and Chris Hughes and others founded Facebook, I think, the week after Roger Federer became number one in men’s tennis for the first time.

The Federer era was almost as long as the Facebook era—unfortunately, I can’t say that any longer. These are young technologies which erupted in scale. We haven’t suddenly woken up and said, oh, gosh, we should ask ourselves what the effect on young people is—what tools we can deliver.

There are a whole range of tools we delivered previously including, by the way, designing a kids-specific experience on Instagram, which you could argue is exactly what we should be doing to make sure that there are particular protections and experiences for young people.

Because of the way that this was portrayed, we decided to push pause on that, because we knew exactly what would happen if we went ahead with it. Instead of the intention, which is to provide a specific, ultra-safe environment for youngsters—we have something called Messenger Kids, for instance—it would immediately be portrayed as us somehow, in a sort of sinister way, trying to get our claws into young people.

We have to—we occupy, I’m afraid, the reality that young people are online for huge amounts of time. Huge numbers of youngsters are online well below the age they probably should be and well below the age we want them to be. We can’t change that.

We have to be realistic about that and make sure that we design products and do the right research so that we act responsibly. We did that. We continue to do that. We will continue to do that. I think it’s perfectly fair for lawmakers and others to press us very hard on that, to condemn us where we haven’t got it right, to push us to move faster.

But the fundamental allegation that we are somehow indifferent to harm online, don’t care about it, and want to let it run amok without trying to stem it is just a ludicrous and, I think, deeply disingenuous allegation, which in the case of that whole—(inaudible)—actually inverted the very purpose of what we were trying to achieve.

GOLODRYGA: Let me ask you—I think we have time for one or two more questions before we open up to the audience—many people believe that Facebook algorithms fanned the flames to lead up to the January 6 insurrection. Are they wrong?

CLEGG: Yes.

GOLODRYGA: Why?

CLEGG: Because I think the people who are responsible for the January 6 insurrection, as the inquiry is quite rightly identifying, are the people who propagated falsehoods about that election, sought to try and interrupt the peaceful transfer of power in a democracy, and whipped up crowds of people to take matters into their own hands at the Capitol.

Honestly, we cooperated with law enforcement before, up to, through, and beyond that day. We have provided huge amounts of material to the January 6 inquiry. We had banned big groups like, you know, the Proud Boys three or four years prior to that.

We acted against the “Stop the Steal” movement before January 6. Since then, we’ve taken significant measures, including suspending Donald Trump for at least two years from using our platforms.

GOLODRYGA: And you said he may come back.

CLEGG: He may for the simple reason that we were very clear that, unlike Twitter, who imposed a permanent ban on him, we believe that any private company—and this is really regardless of one’s personal views about Donald Trump—should tread with great thoughtfulness when seeking to, basically, silence political voices.

We just think that’s an—you know, do people really want to live in a world where Mark Zuckerberg, who’s not elected by anybody, can just simply silence a politician because other people don’t like them?

No. We suspended him because we felt that what he did on our platform at that time was such an egregious breach of our rules that it justified this exceptional measure. We will look again at the facts as they exist.

GOLODRYGA: But have you seen anything on his Truth Social platform or in his public statements that suggests to you that, if he is invited back to Facebook, the kinds of actions that got him taken off of Facebook won’t continue? Has he changed, in your view?

CLEGG: Anyone—it doesn’t matter who you are. You can be the president of the United States, an ex-president of the United States, the Pope, the king—it really doesn’t matter who you are—if you break our rules on hate speech, on incitement to violence and hatred, we will act against you or against your accounts or, in this case, we might suspend you. And, of course, we do suspend some people’s accounts permanently.

We publish our community standards. I think we are now the only tech company that, every twelve weeks, at the same cadence as financial reporting, publishes all the material that we remove. We now also measure the prevalence of things like hate speech on our platform, which—again, we’re the only people to do this—is now audited by EY as a third-party auditor, because we accept that no one’s going to take our word for it if, you know, we’re judge and jury of our own homework.

So we’re very transparent about that, and we’re very transparent that those rules apply to everyone. But that is quite different from being invited to take decisions which, in the end, should be taken by the democratic political process. And it’s why, for instance, in a separate area of our policy and enforcement—one that is a controversial decision but, I think, from first principles is correct—we refrain from fact-checking the direct speech of politicians.

Now, why do we do that? So we apply our community standards. Doesn’t matter who you are. It can be a politician or a nonpolitician. If you commit hate speech and incite hatred and violence, we’ll act against you.

But what we don’t believe we should be doing as a private sector company—and, by the way, just to take it to a slightly less emotional space than the United States: in my country, in the United Kingdom, like clockwork in every single election, the Conservatives—I think they’re going to struggle to say this next time—say that the Labour Party will ruin the economy, and the Labour Party says that the Conservatives will privatize the NHS.

They’ve said this since the dawn of time. They always say it. They always exaggerate it. Everyone does this in politics. They always exaggerate their own virtues and exaggerate the vices of their opponents.

The idea that we step in as a private sector company and say, well, actually, the claim you make about Labour, or, you know, the claim about the Conservatives and the NHS, is not quite right—I think that would lead to a degree of highly politicized and privatized and unaccountable censorship. That role, rightly, in a democracy belongs to the people and to open debate and to scrutiny from people like you in the mainstream media, not to us.

So that’s how we adjudicate our rules. That’s why we’ve come to the views that we have. And we need to strike a balance between making sure that we enforce our rules rigorously, without fear or favor, and not toppling into arrogating to ourselves an inappropriate role for a private sector company in a mature democracy.

GOLODRYGA: You’ve spent a lot of time talking about transparency and you, in the past, have said that you welcome more regulation. Can you be specific about what legislation you would welcome in terms of regulating Facebook at this point?

CLEGG: Oh, I love this. I love this. I mean—

GOLODRYGA: I mean, we can talk about the fact that the Europeans have just passed, you know, the Digital Markets Act and the Digital Services Act.

CLEGG: Yeah. But, I mean, look, it is amazing to me that in the United States there is still no federal privacy legislation. It seems really curious to me that some people have got privacy protections in law in some states and not in other parts of the country.

That’s an obvious area where it seems to me there is a huge need for a clear level playing field, never mind for companies like Meta but just for U.S. citizens, to know what their rights are to data privacy regardless of where they live.

Elections we’ve just talked about. It would be so much more preferable that we were, basically, told by Congress what the rules of the game are in terms of where we should draw the line about what is and is not acceptable.

You’re asking me questions which, in a sense, it is preposterous for me to answer—me telling you, as an employee and an executive of a private sector company, where we draw the line on what is and what is not politically acceptable. It would be great—it would simplify our world, our life, enormously—if we could just say: we’ll follow the law. It’s not our democracy. These are not our elections. They belong to the American people, and their representatives in Congress should set the rules.

Data portability is another good example. If you really want proper competition in the online world, the more people can move their data from one service to a competitor’s service the better.

The private sector is never going to make that easy on its own. One of the reasons why they won’t is partly commercial self-interest, but it’s also because there’s a tension between moving your data around and the need to protect data use and data privacy—because, of course, the more you move it around, the more you can end up losing it or it can fall into the wrong hands.

That is the classic dilemma between privacy, on the one hand, and data portability, on the other hand, that should be decided upon by legislators, not by the private sector.

I could go on. Hate speech. We have a definition of hate speech. Lots of people think it should be wider. Some people think it should be narrower. In this country, roughly half the population think we censor too much. The other half think we don’t censor enough.

I mean, the left and the right now in the United States have diametrically opposing views of whether companies like us are doing a good job. The left think we don’t take down nearly enough and the right think we take down too much. It would be great if the right and the left could agree on legislation. We’ll follow the rules.

I could go on. There are so many areas, it seems to me, where it would be much more appropriate for legislators and regulators to set the rules rather than have the private sector do it and then have everyone shout at the private sector for the failure of legislators to act.

GOLODRYGA: Well, I have more questions that I, selfishly, would like to ask you but let’s open it up to the audience here. We’ll take one live. We’ll go back and forth between—we’ll toggle between live and then virtual.

So—right here, this lady in the front.

Q: Thank you, and good afternoon.

It seems you’ve just said, not in so many words: we’re too big, and we’re being asked to do what the government does precisely because we’re so large, but we’re so large that we shouldn’t really have that much control over people.

And so, first—you know, maybe you can respond to that. And then, second, one of the reasons you’re so big is that you spend so much money on growth and on being, pardon me, addictive—so that people come and spend more time on you and then regret it, you know, then they say, oh, I shouldn’t have stayed up so late—and not enough on moderation, on providing the appropriate friction, on—you know, not making political judgments but, nonetheless, as I understand it, you moderate pretty heavily in English but not in all the other languages of many other troubled countries.

CLEGG: I’m afraid I’d disagree with almost every one of those assertions. So let me see how I can be helpful.

We’re very big, absolutely. Close to half the world’s population uses Instagram, Facebook, Messenger, and WhatsApp. It is undeniably true.

Interestingly, the areas of greatest growth don’t even have any of these so-called nefarious algorithms trying to hook people on them. I mean, WhatsApp is growing faster than any of our other products and has no algorithm at all. It’s, literally, just human beings wanting to communicate with each other.

Q: (Off mic.)

CLEGG: Well, it is, actually, arguably. You could argue—if you go to India, where over 400 million Indians are using WhatsApp, and speak to the Indian government, as they say to me, they think it is a major problem. And there are lots of governments around the world who want to stop the end-to-end encryption on WhatsApp precisely because they want to be able to surveil what people are saying to each other.

So, no—look, all human communication is controversial. You know, anything which facilitates the ability for people to express themselves and to reach out to others with greater freedom than has, perhaps, arguably, ever been the case will, A, grow very significantly in size, because human beings like doing that. People like communicating, particularly with family and friends. They like expressing themselves freely without being told what to do.

And we have a business model which is often criticized but which, I actually think, has one ingenious virtue to it, and it is this: because we’re paid through advertising, no one needs to pay up front to use these apps. And that means you can be a farmer in Bangladesh or a fancy lawyer on Wall Street—you’re still using these apps in exactly the same way. And that has, in my view, had a profound democratizing effect in terms of human expression.

There is—you’re quite right—a very familiar refrain, and books have been written about it and columns endlessly written about it and television programs made about it, and it goes, roughly, like this: these companies have only grown because they have these clever things called algorithms, and then they have this science called AI—and I’m being a little bit facetious on purpose, and I’ll come to the reason why—and that kind of makes the reptilian side of our brain do things that we regret later. It forces us to kind of constantly, you know, get hooked on this stuff, and we actively promote bad content, hate speech, extremist content, and so—

Q: Viral content—(off mic).

CLEGG: Well, hang on—sorry, we’re jumping around. Virality, since you raised it, is a really interesting issue. I agree—one of the things that social media has done is facilitate virality.

Virality isn’t always bad. The fact that huge numbers of human beings can mobilize in political movements—why do you think the Iranian authorities are trying to clamp down on Instagram at the moment? Precisely because it is a technology beyond the control of authoritarian and semi-authoritarian regimes, one which allows human beings to mobilize and so on.

But the point I was trying to make is that I read and hear all of this refrain constantly, and, oddly enough—look, we’re not angels. We’re a private sector company. I’m not trying to persuade everyone that, you know, the people in Silicon Valley walk on water. Far from it.

But think about it from our point of view. Our commercial incentive is to continue to encourage advertisers to run ads online. The advertisers themselves, the people who pay for our lunch, don’t want their ads next to all this awful content that we are constantly alleged to promote.

I mean, it would be illogical for us—in fact, so much so that some of them suspended running ads on Facebook for a while—which is one of the reasons why, for instance, now, if you look at the report that I alluded to earlier, we use these algorithms, far from promoting bad content, to suppress it.

So hate speech now—the prevalence of hate speech on Facebook stands at about 0.02 percent. That means for every ten thousand bits of content you’d see you might see two bits of hate speech.

I wish it was zero. It’s not going to be zero because it’s legal speech. Why are we investing so much money, so much engineering capacity, to suppress the very content that we are often alleged to promote?

We do that, A, because people prefer it. They like it. People don’t want to be wound up or feel addicted or angry. And, by the way—because I often hear people say, oh, you’re just doing this to give people this instant dopamine hit—

We deliberately survey users a day after they’ve used our services, so that they have time to think in slow time about whether they found it worth their while, and we use that when determining how we rank content on people’s feeds—precisely because we accept that human beings react differently in the instant than when they are reflective in a quieter moment. We know that’s not only good from a user point of view but good from our point of view, because it means people will continue to use our products more sustainably, and it means it’s easier for advertisers to run ads without them appearing next to some horrible piece of hate speech.

Are we perfect? No. Could we move faster? Sure. Could we—

Q: (Off mic.) (Laughter.)

CLEGG: Well, no—one thing I will accept, I think—whilst we might disagree on the caricature, and I do think it’s a caricature—is this. I, personally, think what you’ve seen in recent years is an almost silly mood swing from the sort of tech euphoria, the tech utopianism of the past, when social media, at the time of the Arab Spring, you know, could do no wrong. It was going to be the solution to all our problems.

Now it’s swung completely the other way. There’s a whole cottage industry of people who now claim that everything, everything that people don’t like from climate change to an election outcome they don’t like, has to be the fault of social media.

Neither is true—neither the excessively optimistic nor the excessively pessimistic depiction. But one thing which I think is implicit in your questions, which I accept, is this, and it is a societal question: do we or do we not welcome the ability for all of us to communicate with each other with less and less friction?

You’re right, it is a very frictionless way of—well, it isn’t entirely frictionless. So, for instance, yesterday, I think, we announced that we are now using new nudge technology so that when people try and reach celebrities, for instance, on Instagram, we will now have a little pop-up message saying: just think about what you’re going to say. Please don’t be abusive. Try and be kind.

You know, we’ve worked with a lot of the nudge theorists—Richard Thaler and others—because we think this is a way of getting people to conduct themselves in a civil way.

So we don’t believe in completely frictionless communication. Of course not. Otherwise, you wouldn’t have those community standards and we wouldn’t remove as much content as we do.

But the fundamental point—that technologies as they evolve tend to allow human beings to do things more easily than previous generations could—yeah, that’s definitely what’s happened in the internet age. And I totally accept there’s a wider debate about whether we’ve got the balance right.

GOLODRYGA: It’s not just you. I mean, I think there’s a question as to whether technology that promotes engagement also promotes extremism, and it’s not just with regards to Facebook. It’s other social media companies.

CLEGG: Sure.

GOLODRYGA: I do want to get to one of our viewers online. And, please, I should note, introduce yourself before you ask your question.

OPERATOR: We will take our next question from Peggy Hicks.

Q: Thanks very much. Peggy Hicks. I’m at the U.N.’s human rights office here in Geneva. Good to talk with you, Nick.

We want to talk a little bit more about the impact of Meta and Facebook in diverse environments, in the Global South in particular. And one of the things the Facebook oversight board has really brought attention to is the amount of investment in translation into other languages and the amount of content moderation that Facebook is able to do in other contexts.

This came through, for example, with the fact that the community standards, which are the rules by which the platform operates, were only translated into Urdu and Punjabi in 2021—languages, of course, that have hundreds of millions of speakers. And then, more recently, on the Israeli-Palestinian conflict in May 2021: Facebook undertook a due diligence assessment based on concerns that were raised about the content moderation that was done then, and the due diligence found that there was, indeed, not sufficient Arabic content moderation and that there were problems with the amount of resources invested.

So the question is, how are you looking at that question now? What is the standard for how much you invest if you’re going to go into a market? Are you investing enough in terms of the translation and content resources—content moderation resources—that you need? What will you do to get that balance right in the future?

CLEGG: Yeah. Thanks for that question, Peggy.

I think the first thing we should do—and you’ve alluded to it yourself—is hold ourselves to account, hold our feet to the fire. It wasn’t anyone else who conducted the due diligence into whether we had the right resources for content moderation in the Middle East. We did that ourselves.

We published it, and lots of people then took the criticisms in the report and bashed us over the head with it. But I think that’s the right thing to do. We published it because we feel we need to be held to high standards.

So, first, accountability. The oversight board you mentioned—again, no one else has done this. We established this fully independent oversight board that has the binding right to impose decisions upon us when there are appeals from users about whether our decisions on taking down content or keeping it up are right or not.

I think it has been a really important innovation. I think it still needs to kind of grow into itself. I think it needs, particularly, to work with maybe slightly greater velocity than it’s managed to do over the last couple of years.

But, again, no one else has done that. So first pillar in all of that, Peggy, is accountability, is scrutiny, is transparency.

The second one is technology. I cannot overstate how exciting it is to see the leaps and bounds in AI technology when it comes to language and translation. We now have some of the most advanced AI translation tools anywhere in the industry.

So I’ll give you a very good example. Last week, we announced that we now have the ability, through our AI systems, to translate from a purely oral language—Hokkien, in Asia—into any other language. No one else has managed to do this. And our AI systems have let us move with great velocity away from the much more cumbersome way in which translation used to happen—particularly for highly localized languages in parts of the world where technology has not traditionally been very mature—where you’d take the content in one language, translate it into a pivot language—English, Spanish, whatever—and then translate it into another.

Our AI advances have allowed us to do it immediately from one language, however rare, into any other. So transparency is the first pillar, and technology, which is almost by the week transforming our language coverage capability, is the second. And then we have to translate that into the tools that we deploy. Let me give you a very specific—actually, a U.S.—example.

One of the things we discovered in the Indian elections—was it last year, 2021? Yeah—was very interesting: many users of our apps in India, even if they used the apps with English as the main language setting, were actually engaging with content that wasn’t in English but in other languages.

So we learned from that. We decided, well, why don’t we try this wherever we see that—where we see that someone has their account setting in English but, to take a very apposite example in the U.S. in the run-up to the midterms, is actually engaging all the time with Spanish-language content from politicians and campaigns—shouldn’t we immediately default to that language in the notifications we send them, for instance, about where to vote, when to vote, and so on?

And I think—I need to double-check, but I’m now pretty sure—that in the U.S. in the run-up to the midterms, which are, what, just seventeen days away, it doesn’t matter whether you speak Arabic or Chinese or Korean or French or Spanish—if you’re consuming content in those languages rather than in the English-language setting of your account, you will automatically receive from us notifications about the election in that language.

And I just think that’s, I hope, an interesting and really useful specific example of using new technologies to reflect how people use different languages interchangeably—something we just couldn’t have done even a couple of years ago.

GOLODRYGA: OK. Here in the corner. David, introduce yourself.

Q: Thank you. I’m David Kirkpatrick. You might have heard of me. I wrote a book about your company.

I don’t know if people can hear me. But it’s very distressing listening to you, although I have to say, first of all, it’s very good you’re here and much overdue. I don’t believe such an event has ever occurred, despite the fact that you’re an American company with 3.5 billion users, which is, I think, very distressing because you need much more dialogue with your critics, which definitely includes pretty much everyone in this room and on this virtual session, as you well know.

And it’s really distressing that you’re so defensive. But I can see why, because a year ago, when Frances Haugen released an enormous amount of information—way more than was addressed here—I mean, this thing about teenagers was a relatively minor thing that, to your good fortune, really became a big cause célèbre, when in reality most of the egregiously embarrassing revelations had to do with global events and the treatment of people in countries around the world that you have such a huge impact on.

So I guess—I’m going to try to make it a question. But you said that, you know, it’s great that Mark Zuckerberg will make big bets. The reason we are all—not just me, almost everyone—so distressed with your company right now is that a year ago, at the height of these revelations, when all of us—even people unlike myself, who have not studied your company intensely—were starting to realize the vast range of harms that you were participating in or contributing to in one way or another—

GOLODRYGA: OK. So, David—

Q: —you made a huge bet and it was a bet to do something completely different, and ever since then your CEO has, essentially, said nothing in public about all those other harms and has acted as if all that matters is this coming metaverse which was so important you had to change your company’s name.

And let me tell you, that was dishonest and extremely distressing. And thank you.

GOLODRYGA: David, I’m going to—David, that wasn’t a question and that wasn’t fair.

CLEGG: No, but I’ll try and explain. I mean, I’m not here to try and distress you—and apologies if I come across as unduly defensive—but I do think I’m entitled to puncture what I think is a consistent narrative, which is often fact-free, about how our systems work, how social media is constructed, and what the incentives of the company are.

You, David, are entitled to disagree with it. And, of course, I’m free to disagree with you. In fact, I agree with you that this is a rarity. We should do this more often. But, forgive me, you will, I’m afraid, hear from me quite a different narrative to the one which has been very prevalent in the press and elsewhere. And when I responded, as I think I rightly did, at the time to the Frances Haugen press cycle—as you alluded to yourself, it was a press cycle about that particular study.

It seems to me I’m perfectly entitled, since no one else puts the alternative view, to say, well, hang on, what are the facts? And what I’ve tried to do this morning, and what I always try to do in all my work, is present facts, because there are so many allegations about systems and harm and a lot of emotive language thrown around.

I don’t believe close to half the world’s population would be using our products if they all found it a dreadful and hateful experience. That’s, clearly, not the case. I read so often, for instance, that Facebook is awash with political content. Political content is now less than 3 percent of content on Facebook.

We don’t necessarily want it there. In fact, we’ve made big changes in recent months to reduce the amount of political content on Facebook because guess what? The vast majority of normal human beings use Facebook for what I call babies, barbecues, and bar mitzvahs to share pictures of their holidays and their kids and so on and so forth. That’s what I use it for.

And I just sometimes think so much of the debate around social media, as I said earlier, has swung from excessive utopianism to almost extreme pessimism. I guess all I’m trying to do in my job is to say: if you want the pendulum to rest somewhere in a more balanced way, let’s have a greater respect for the facts, as distinct from the allegations, and let’s have regulation so that there are clear guardrails within which these companies operate. That, at the end of the day, is the way that we’re going to solve these issues—rather than just yelling at these companies with cardboard-cutout caricatures of the way that they work.

Q: So have more—

GOLODRYGA: Let’s go—let’s get in the back there.

Q: Hi. My name is Sherry Hakimi and I am going to ask a question that’s couched in a personal experience.

I am an Iran policy expert and I run a gender equality nonprofit organization. And given everything that’s happening in Iran at the moment with the feminist female-led protests, I was invited to speak with Secretary Blinken last Friday. And in the wake—

CLEGG: I’m very sorry, I can’t hear.

GOLODRYGA: Could you speak up just a little bit?

Q: Sorry. I was invited to speak with Secretary Blinken last Friday. And in the wake of that meeting I have become the target of a vicious social media attack, and I have received over forty death threats on Instagram, which I have been reporting since last Friday. I understand that there are community guidelines, but in the reporting mechanism there is nothing for death threats. There is bullying and harassment, and there is violence and dangerous organizations. And in the six days since I reported them, I have not gotten a response on a single one. So I’m curious: why don’t you have a way to report death threats, which are probably the most dangerous thing that can happen on your platform?

CLEGG: Well, I’m very, very sorry, indeed, that you find yourself in that position. You shouldn’t be receiving those death threats. They are contrary to the rules that exist on Instagram.

I’m exceptionally distressed to hear that you feel you have not been protected by the mechanisms in place. Without knowing more detail I can’t, obviously, comment. You know that you can, obviously, use Hidden Words, block accounts, block language, and report it.

You shouldn’t be in—find yourself in the position that you are. I am sorry. Just without knowing more detail—if I can, I would very—be very keen to—

GOLODRYGA: Maybe you can speak offline afterwards.

CLEGG: Yeah.

GOLODRYGA: Can we—right here in the corner.

Q: Thank you. My name is Ani Zonneveld, Muslims for Progressive Values.

I’ve also had similar experiences as this lady here. But fortunately, I have an in with Facebook with Shaarik, who I can email directly and tell him, hey, this is what’s up, and so he’s able to assist me and other affiliates throughout the country who have received death threats.

I’m wondering if there is a way that you can improve your reporting system in situations like ours—if there’s a way for nonprofit organizations or human rights organizations that are legitimate, that are of ECOSOC status, for example, or that have a track record of defending human rights—if there’s a direct link or a database that you can create so that when we do complain directly to Facebook it is taken seriously.

Because what’s happened, for example, in India is that there are, like, millions of instances of hate speech and advocacy of violence against religious minorities, and it doesn’t trickle up quickly enough. There’s got to be a shortcut—Facebook has to create a system that really helps alleviate some of this very violent language on the platform. Thanks.

CLEGG: So I, basically, agree with you. As Shaarik may have explained to you, we recently, as of last year, instituted something which didn’t exist before and, arguably, should have: we now have a fully fledged corporate human rights policy.

We report every year transparently on whether and in what way we think we are honoring the commitments that we make under human rights umbrella entities like the GNI that you’ll be familiar with.

That then, in turn, gets scrutinized and audited by human rights organizations. We also employed, I think two or three years ago, a director of human rights in the company, who, ultimately, is accountable to me and has a team. They work with a network of human rights organizations around the world.

We have, as you know, a number of what we call trusted partners in each country who we work with, including human rights organizations, precisely so that they have an accelerated route by which they can flag what they see on the ground.

That is one of the things that we instituted, not least after the terrible violence in Myanmar some years ago. And we also have policies—some of which, candidly, have not worked as well as they should have at the outset, but I think we’re improving them—to make sure that we provide particular due diligence on content that is aimed at human rights activists, journalists, or others, particularly in high-stress, high-violence environments.

There’s a particular program, for instance, called CrossCheck, which the oversight board that I mentioned earlier is looking at right now, because I think there was some legitimate criticism of it. The CrossCheck program is designed precisely to help those who are working in these very difficult circumstances—to make sure that there is an additional layer of due diligence so that any content decisions properly protect their interests and, if necessary, their anonymity.

So, you know, I think we have made significant advances in the last two or three years. It’s perfectly legitimate to ask whether we should have moved faster, and I think there’s more that we can do. But I, definitely, think these are significant improvements on where we were a short while ago. But if there are any—again, if there are any specific things that I can help with or my team can help with—

GOLODRYGA: I think we have time for one more question. Right here in the white jacket. (Inaudible.)

Q: Hi, Nick. Nayeema Raza with New York Magazine. I executive produce a podcast called On With Kara Swisher. She says hello. We hope you’ll join us soon.

I’m curious about the January 6—you know, about, actually, the Donald Trump decision. You’ve said that you’ll be making this decision with input, of course, from other executives at Meta.

I’m curious if you can share more about the process and principles by which you’re making that decision and, specifically, are you watching and weighing the information coming out of the January 6 hearing.

CLEGG: There’s not much more that I can add to what, I think, we’ve said, which is that it was a two-year suspension, and it expires in early January. We work with experts. We’ll, obviously, look at the data about circumstances on the ground.

At the end of the day, we have to make the judgment about what we think happens on our platform. I should stress that if Donald Trump were to return to our platforms—I mean, it’s not a sort of free-for-all. You know, rules would still apply. Scrutiny would still apply.

All the qualifications and conditions that apply to the use of the platform by him or anyone else would still apply and, yes, of course, we take in all the input from the January 6 inquiry and others to make that assessment.

And I suppose the point I was just simply making earlier was that in a country as politically polarized as it is, I totally understand for people who are fervent opponents of Donald Trump it’s very natural to say he should never be able to appear on social media again.

At the end of the day—you know, at the end of the day, whether, you know, Donald Trump succeeds in politics or not is a democratic question which will be determined in the ballot box, and we are also mindful of our role as a private sector company not to overstep the mark in what are, in the end, intensely political questions.

Q: (Off mic)—process or—(off mic)?

CLEGG: So we have a team, obviously, responsible for taking in—ingesting, if you like—all those data points, which is exactly what they’re doing now, and we’ll make announcements in due course around the time that that two-year period, which we always said at the time was, clearly, a temporary suspension, expires.

I can’t—I just can’t really give you much of a running commentary in the meantime, I’m afraid.

GOLODRYGA: Before we wrap up, just a quick—any updates on the investigation into Sheryl Sandberg using company resources for personal purposes?

CLEGG: I think—sorry, I don’t think she did.

GOLODRYGA: That was the allegation—the reporting. No?

CLEGG: I don’t—no, she, certainly, didn’t. So I don’t think it’s an open issue, because I think it’s all been resolved.

GOLODRYGA: OK. Quickly—one last one, in less than a minute.

Q: Thanks. Hi. I’m Yael Eisenstat. I’m at the Anti-Defamation League. I head the Center for Technology and Society.

You’re probably aware I was at Facebook in 2018 as the global head of elections integrity ops for political ads. I will not go through all the things I disagree with here. But to follow up on David, would love to see you share the stage with critics at some point.

You do a lot of one-on-one interviews. You and I were supposed to testify in front of the European Parliamentary Commission together. You did not show up. I would just love to see you actually speak with people on the stage who can then say, well, this is why I disagree. Because this is not the venue for that, unfortunately. So I invite you to do that.

CLEGG: Sorry. I find it somewhat peculiar that I’m here talking to you and then I’m now accused of not being here talking to you.

I mean, look—no—look, firstly, I have a day job. So, secondly, we, as a company now—totally right—with success and size comes responsibility and scrutiny. We now, in my view, never act perfectly but have institutions from the oversight board, have published data like no others do, which is independently audited, have put—since the six months or so that you worked in the company in 2018 they’ve completely transformed.

We’ve completely transformed the guardrails that we put in place for elections, including language coverage, and including, in the midterms, for instance, not running new political ads for the last week.

You may not like this but it seems to me we’re entitled to put our alternative case—you will never agree with it—in the best format that we think that we can. That’s why I’m here. That’s why we publish all that data. That’s why we have all that data audited. That’s why we have these independent entities like the oversight board in the way that no one else does.

I think you can say many, many things about Facebook. The idea that the critics don’t have their say is not one of them, I think.

With that, many thanks very much, indeed.

GOLODRYGA: Thank you, everyone. Have a good weekend. (Applause.)

(END)

This is an uncorrected transcript.
